feature: block tag support(safe&finalize) #857
Conversation
📝 Walkthrough

Adds a new BlockTag subsystem (config, CLI binding, and BlockTagService) that polls L1 and the rollup contract to compute safe and finalized L2 block tags, validates state roots, caches batch indices, notifies Geth via RPC, integrates service lifecycle into sequencer startup/shutdown, and exposes RetryableClient.SetBlockTags.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Poller as BlockTagService
    participant L1 as L1 Node
    participant Rollup as Rollup Contract
    participant L2 as L2 Node
    participant Geth as Geth (RPC)
    loop periodic poll
        Poller->>L1: fetch L1 head & confirmations
        L1-->>Poller: L1 block info
        Poller->>Rollup: getLastCommittedBatchAtBlock(L1 block)
        Rollup-->>Poller: batch index & committed state root
        Poller->>L2: resolve L2 block for batch (binary/forward search)
        L2-->>Poller: L2 header & state root
        Note over Poller: compare/validate L1 committed state root vs L2 header root
        alt tags changed
            Poller->>Geth: SetBlockTags(safeHash, finalizedHash)
            Geth-->>Poller: ACK / error
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@node/blocktag/service.go`:
- Around line 63-101: In NewBlockTagService, if bindings.NewRollup fails the
previously created l1Client is leaked; ensure l1Client is closed before
returning the error (e.g., close l1Client when rollup creation returns an error
or use a deferred cleanup tied to the rollup creation step) so that l1Client is
not left open on failure.
- Around line 292-335: findCompletedBatchForL2Block can recurse indefinitely
when binarySearchBatch yields an index but forward scanning finds no completed
batch; replace the recursive reset pattern with a single retry loop or a boolean
"cacheReset" flag: when resultIdx == 0, if we haven't already reset the cache,
call setCachedBatchIndex(tagType, 0), set the flag, recompute startIdx via
binarySearchBatch and retry the forward scan once; if it still yields no result,
return an error instead of recursing. Update logic around getCachedBatchIndex,
binarySearchBatch, setCachedBatchIndex, and the forward scan to respect the
single-retry flag.
In `@node/cmd/node/main.go`:
- Around line 147-159: The BlockTagService startup assumes L1NodeAddr and
RollupContractAddress are present but the CLI flags L1NodeAddr and
RollupContractAddress are currently optional; either make the flags required or
skip initializing the service when they're absent. To fix, either (A) update the
flag definitions for L1NodeAddr and RollupContractAddress in node/flags/flags.go
to include Required: true so NewBlockTagService can always rely on them, or (B)
modify the BlockTagService initialization in main.go (the blockTagConfig /
blockTagSvc / NewBlockTagService code path) to check the parsed flag values and
if either L1NodeAddr or RollupContractAddress is empty, log a warning and do not
call blocktag.NewBlockTagService (skip Start()), making the service optional and
avoiding the runtime error.
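Option (B) above, treating the service as optional, can be sketched as a small guard; the `blockTagConfig` struct and `shouldStartBlockTagService` name below are illustrative stand-ins, not the project's actual types:

```go
package main

import "fmt"

// blockTagConfig is a hypothetical trimmed-down config; the field
// names mirror the flags discussed above but are illustrative only.
type blockTagConfig struct {
	L1Addr        string
	RollupAddress string
}

// shouldStartBlockTagService returns false when either required
// address is missing, so main.go can skip the service (and log a
// warning) instead of failing at runtime.
func shouldStartBlockTagService(cfg blockTagConfig) bool {
	return cfg.L1Addr != "" && cfg.RollupAddress != ""
}

func main() {
	fmt.Println(shouldStartBlockTagService(blockTagConfig{}))
	fmt.Println(shouldStartBlockTagService(blockTagConfig{
		L1Addr:        "http://l1:8545",
		RollupAddress: "0x0000000000000000000000000000000000000001",
	}))
}
```

The same check could equally live in `SetCliContext`; placing it at the call site keeps the service constructor's hard validation intact.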
🧹 Nitpick comments (2)
node/blocktag/config.go (1)
36-39: Consider exposing `SafeConfirmations` and `PollInterval` via CLI flags.

Currently, `SetCliContext` only reads `L1Addr` and `RollupAddress` from CLI flags. Users cannot override `SafeConfirmations` or `PollInterval` without code changes. Also, `common.HexToAddress` silently returns a zero address for invalid input. While `NewBlockTagService` validates this, consider adding early validation here or logging a warning for better debuggability.

♻️ Suggested enhancement
```diff
+import (
+	"fmt"
+	...
+)
+
+// Add new flags in node/flags/flags.go:
+// BlockTagSafeConfirmations = cli.Uint64Flag{...}
+// BlockTagPollInterval = cli.DurationFlag{...}

 func (c *Config) SetCliContext(ctx *cli.Context) error {
 	c.L1Addr = ctx.GlobalString(flags.L1NodeAddr.Name)
-	c.RollupAddress = common.HexToAddress(ctx.GlobalString(flags.RollupContractAddress.Name))
+	rollupAddrStr := ctx.GlobalString(flags.RollupContractAddress.Name)
+	if rollupAddrStr != "" && !common.IsHexAddress(rollupAddrStr) {
+		return fmt.Errorf("invalid rollup address: %s", rollupAddrStr)
+	}
+	c.RollupAddress = common.HexToAddress(rollupAddrStr)
+	// Optionally read SafeConfirmations and PollInterval from flags
 	return nil
 }
```

node/blocktag/service.go (1)
154-197: `updateBlockTags` returns nil even when errors occur.

The function logs errors for `getL2BlockForTag` and `notifyGeth` but always returns `nil`. This makes it harder for callers to know if updates succeeded. Consider returning an error or using a multi-error pattern if partial success is acceptable.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- node/blocktag/config.go
- node/blocktag/service.go
- node/cmd/node/main.go
- node/types/retryable_client.go
🧰 Additional context used
🧬 Code graph analysis (3)
node/blocktag/config.go (2)
oracle/flags/flags.go (1): RollupAddress (35-40)
node/flags/flags.go (2): L1NodeAddr (63-67), RollupContractAddress (187-191)
node/blocktag/service.go (4)
tx-submitter/iface/client.go (1): Client (14-23)
node/types/retryable_client.go (1): RetryableClient (29-34)
node/blocktag/config.go (1): Config (20-25)
oracle/flags/flags.go (1): RollupAddress (35-40)
node/cmd/node/main.go (5)
node/core/executor.go (1): Executor (31-61)
node/sync/syncer.go (1): Syncer (15-29)
node/derivation/derivation.go (1): Derivation (39-67)
node/blocktag/service.go (2): BlockTagService (33-61), NewBlockTagService (64-101)
node/blocktag/config.go (1): DefaultConfig (28-33)
🪛 GitHub Actions: Tx-submitter
node/types/retryable_client.go
[error] 227-227: rc.authClient.SetBlockTags undefined (type *authclient.Client has no field or method SetBlockTags)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test
- GitHub Check: check
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (rust)
🔇 Additional comments (7)
node/cmd/node/main.go (1)
186-188: LGTM! Shutdown handling correctly checks for nil before stopping the service.
node/blocktag/service.go (5)
103-118: LGTM! Initialization failure is logged but doesn't block startup, which is reasonable for a non-critical service. The polling loop will retry on subsequent ticks.
120-127: LGTM! Stop cleanly cancels context, waits for the loop to exit via the `stop` channel, and closes the L1 client.
269-290: State root validation logic looks correct. The comparison between L1 committed state root and L2 block header root is a good safety check to detect mismatches.
411-435: LGTM! The notification logic correctly skips redundant RPC calls when tags haven't changed and updates the last-notified hashes only after successful notification.
375-393: No action needed: `rpc.FinalizedBlockNumber` conversion is compatible and follows an established pattern. The same pattern of converting `rpc.FinalizedBlockNumber` (a negative constant) to `big.Int` is already successfully used elsewhere in the codebase (node/common/layer1.go:24), where it's passed to `HeaderByNumber()` without any compatibility issues. This confirms the conversion works correctly with go-ethereum's RPC layer and `CallOpts.BlockNumber`.

node/types/retryable_client.go (1)
225-240: The `SetBlockTags` method exists on `authclient.Client` and the code compiles successfully. A recent commit (fd7ab3e, feature: block tag support(safe&finalize)) explicitly added this functionality to the morph-l2/go-ethereum fork. All methods called on `rc.authClient` follow the identical retry pattern shown here, and the method is actively used in `node/blocktag/service.go`. Likely an incorrect or invalid review comment.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```go
// NewBlockTagService creates a new BlockTagService
func NewBlockTagService(
	ctx context.Context,
	l2Client *types.RetryableClient,
	config *Config,
	logger tmlog.Logger,
) (*BlockTagService, error) {
	if config.L1Addr == "" {
		return nil, fmt.Errorf("L1 RPC address is required")
	}
	if config.RollupAddress == (common.Address{}) {
		return nil, fmt.Errorf("Rollup contract address is required")
	}

	l1Client, err := ethclient.Dial(config.L1Addr)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to L1: %w", err)
	}

	rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
	if err != nil {
		return nil, fmt.Errorf("failed to create rollup binding: %w", err)
	}

	ctx, cancel := context.WithCancel(ctx)

	return &BlockTagService{
		ctx:               ctx,
		cancel:            cancel,
		l1Client:          l1Client,
		l2Client:          l2Client,
		rollup:            rollup,
		rollupAddress:     config.RollupAddress,
		safeConfirmations: config.SafeConfirmations,
		pollInterval:      config.PollInterval,
		logger:            logger.With("module", "blocktag"),
		stop:              make(chan struct{}),
	}, nil
}
```
Potential resource leak: L1 client not closed on subsequent error.
If bindings.NewRollup fails at line 82-85, the l1Client created at line 77 is not closed.
🔧 Suggested fix
```diff
 l1Client, err := ethclient.Dial(config.L1Addr)
 if err != nil {
 	return nil, fmt.Errorf("failed to connect to L1: %w", err)
 }
 rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
 if err != nil {
+	l1Client.Close()
 	return nil, fmt.Errorf("failed to create rollup binding: %w", err)
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// NewBlockTagService creates a new BlockTagService
func NewBlockTagService(
	ctx context.Context,
	l2Client *types.RetryableClient,
	config *Config,
	logger tmlog.Logger,
) (*BlockTagService, error) {
	if config.L1Addr == "" {
		return nil, fmt.Errorf("L1 RPC address is required")
	}
	if config.RollupAddress == (common.Address{}) {
		return nil, fmt.Errorf("Rollup contract address is required")
	}
	l1Client, err := ethclient.Dial(config.L1Addr)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to L1: %w", err)
	}
	rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
	if err != nil {
		l1Client.Close()
		return nil, fmt.Errorf("failed to create rollup binding: %w", err)
	}
	ctx, cancel := context.WithCancel(ctx)
	return &BlockTagService{
		ctx:               ctx,
		cancel:            cancel,
		l1Client:          l1Client,
		l2Client:          l2Client,
		rollup:            rollup,
		rollupAddress:     config.RollupAddress,
		safeConfirmations: config.SafeConfirmations,
		pollInterval:      config.PollInterval,
		logger:            logger.With("module", "blocktag"),
		stop:              make(chan struct{}),
	}, nil
}
```
🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 63 - 101, In NewBlockTagService, if
bindings.NewRollup fails the previously created l1Client is leaked; ensure
l1Client is closed before returning the error (e.g., close l1Client when rollup
creation returns an error or use a deferred cleanup tied to the rollup creation
step) so that l1Client is not left open on failure.
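As an alternative to the inline `l1Client.Close()`, the deferred-cleanup option the prompt mentions can be sketched as below; `fakeClient` and `initService` are illustrative stand-ins for the real ethclient and constructor, not the project's code:

```go
package main

import (
	"errors"
	"fmt"
)

// fakeClient stands in for the ethclient connection; Close records
// that cleanup ran.
type fakeClient struct{ closed bool }

func (c *fakeClient) Close() { c.closed = true }

// initService shows the deferred-cleanup shape: the defer closes the
// client on any error path and is disarmed just before the
// successful return.
func initService(client *fakeClient, bindOK bool) (err error) {
	ok := false
	defer func() {
		if !ok {
			client.Close() // runs only when an error is returned below
		}
	}()
	if !bindOK {
		return errors.New("failed to create rollup binding")
	}
	ok = true
	return nil
}

func main() {
	c := &fakeClient{}
	fmt.Println(initService(c, false), c.closed) // error path: client closed
	c2 := &fakeClient{}
	fmt.Println(initService(c2, true), c2.closed) // success: client stays open
}
```

The defer variant scales better if more failure points are added after the dial, since each new early return is covered automatically.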
Force-pushed 268bb59 to 68ac8b6 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@node/blocktag/service.go`:
- Around line 177-181: The current logic can overwrite the safe block when
getL2BlockForTag(TagTypeSafe, ...) fails because safeBlockNum stays 0; modify
the flow in the function that calls getL2BlockForTag to track whether the safe
block was successfully retrieved (e.g., a boolean safeFound or initialize
safeBlockNum to -1) and only perform the comparison/assignment to safeBlockHash
and call s.setSafeL2Block(...) if the safe retrieval succeeded; reference
getL2BlockForTag(TagTypeSafe,...), safeBlockNum, finalizedBlockNum,
safeBlockHash and s.setSafeL2Block to locate where to add the check and guard
the finalize-vs-safe overwrite.
♻️ Duplicate comments (1)
node/blocktag/service.go (1)
82-85: Resource leak: `l1Client` not closed on `NewRollup` failure.

If `bindings.NewRollup` fails, the `l1Client` created at line 77 remains open.

🔧 Suggested fix

```diff
 rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
 if err != nil {
+	l1Client.Close()
 	return nil, fmt.Errorf("failed to create rollup binding: %w", err)
 }
```
🧹 Nitpick comments (3)
node/blocktag/config.go (1)
38-54: Consider: `--mainnet` flag takes precedence over explicit `--derivation.rollupAddress`.

The current logic prioritizes `--mainnet` over an explicitly provided rollup address. If both flags are set, the explicit address is silently ignored. This may be intentional, but could surprise users. Also, `PollInterval` has no CLI flag to override the default; consider adding one for operational flexibility if needed.

♻️ Optional: Warn or error if both flags are set

```diff
 func (c *Config) SetCliContext(ctx *cli.Context) error {
 	c.L1Addr = ctx.GlobalString(flags.L1NodeAddr.Name)
 	// Determine RollupAddress: use explicit flag, or mainnet default, or error
-	if ctx.GlobalBool(flags.MainnetFlag.Name) {
+	isMainnet := ctx.GlobalBool(flags.MainnetFlag.Name)
+	hasExplicitRollup := ctx.GlobalIsSet(flags.RollupContractAddress.Name)
+
+	if isMainnet && hasExplicitRollup {
+		// Log warning or return error about conflicting flags
+	}
+
+	if isMainnet {
 		c.RollupAddress = node.MainnetRollupContractAddress
-	} else if ctx.GlobalIsSet(flags.RollupContractAddress.Name) {
+	} else if hasExplicitRollup {
 		c.RollupAddress = common.HexToAddress(ctx.GlobalString(flags.RollupContractAddress.Name))
 	} else {
```

node/blocktag/service.go (2)
273-277: Consider using `Error` log level for failed L1 query.

Failing to retrieve `LastFinalizedBatchIndex` is an unexpected condition that could mask state root mismatches. Using `Info` level may cause this to go unnoticed in production.

```diff
-	s.logger.Info("Failed to get last finalized batch index, skipping state root validation", "error", err)
+	s.logger.Error("Failed to get last finalized batch index, skipping state root validation", "error", err)
```
385-388: Consider logging errors during binary search.

Silently returning on RPC errors could mask intermittent L1 connectivity issues or contract call failures.

```diff
 batchData, err := s.rollup.BatchDataStore(nil, big.NewInt(int64(mid)))
 if err != nil {
+	s.logger.Debug("Binary search batch lookup failed", "batchIndex", mid, "error", err)
 	return result // Return best result so far on error
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- node/blocktag/config.go
- node/blocktag/service.go
- node/cmd/node/main.go
- node/core/config.go
- node/flags/flags.go
- node/types/retryable_client.go
🧰 Additional context used
🧬 Code graph analysis (3)
node/cmd/node/main.go (2)
node/blocktag/service.go (2): BlockTagService (33-61), NewBlockTagService (64-101)
node/blocktag/config.go (1): DefaultConfig (30-35)
node/blocktag/service.go (2)
node/types/retryable_client.go (1): RetryableClient (29-34)
node/blocktag/config.go (1): Config (22-27)
node/blocktag/config.go (3)
oracle/flags/flags.go (1): RollupAddress (35-40)
node/core/config.go (1): MainnetRollupContractAddress (30-30)
node/flags/flags.go (4): L1NodeAddr (63-68), MainnetFlag (238-241), RollupContractAddress (188-192), BlockTagSafeConfirmations (225-230)
🪛 GitHub Actions: Tx-submitter
node/types/retryable_client.go
[error] 227-227: Build error during 'make build': rc.authClient.SetBlockTags undefined (type *authclient.Client has no field or method SetBlockTags).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: check
- GitHub Check: check
- GitHub Check: test
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
🔇 Additional comments (15)
node/core/config.go (1)
28-31: LGTM! The mainnet rollup contract address constant is properly added, following the existing naming convention for mainnet-specific values. This provides a centralized reference for the BlockTag service configuration.
node/flags/flags.go (3)
63-68: LGTM! Making `L1NodeAddr` required addresses the flag inconsistency issue where `BlockTagService` requires the L1 RPC address but it was previously optional.
224-230: LGTM! The new `BlockTagSafeConfirmations` flag is properly defined with a sensible default value (10) that matches `DefaultSafeConfirmations` in the blocktag config.
350-352: LGTM! The flag is correctly wired into the global `Flags` slice.

node/cmd/node/main.go (2)
186-188: LGTM! Proper nil-check before stopping the service ensures graceful shutdown regardless of which code path was taken during startup.
147-159: BlockTagService integration is correctly structured with no blocking dependencies.

The service lifecycle management is correct:
- Configuration is populated from CLI context with proper error handling for required fields
- Service is created and started with error propagation
- Shutdown path properly checks for nil before stopping
The `SetBlockTags` method is implemented in `RetryableClient` and fully functional. The code builds successfully and should execute without issues. The `RollupContractAddress` validation is appropriately handled by `blocktag.Config.SetCliContext`: either `--mainnet` or `--derivation.rollupAddress` must be provided, with a clear error message if neither is set.

Note:
`L1NodeAddr` is not validated in the config and can be empty, though this will fail naturally at RPC connection time.

node/blocktag/config.go (1)
14-27: LGTM! Constants and struct are well-defined. The defaults (10 confirmations, 10s poll interval) are reasonable starting points.
node/types/retryable_client.go (1)
225-240: Unable to verify the critical build failure claim from the codebase.

The `SetBlockTags` method implementation follows the identical retry pattern as all 10 other methods in `RetryableClient` (AssembleL2Block, ValidateL2Block, NewL2Block, NewSafeL2Block, CommitBatch, AppendBlsSignature, BlockNumber, HeaderByNumber, CallContract, CodeAt), and the method is already being called in `node/blocktag/service.go` (line 453). The external `authclient.Client` type comes from `github.com/morph-l2/go-ethereum v1.10.14-0.20251219060125-03910bc750a2` as specified in `node/go.mod`. The go-ethereum submodule in this repository is empty in the sandbox, which prevents direct verification of the method definition. Confirm whether the actual build pipeline reports this error and, if so, check the go-ethereum fork repository directly to determine if the `SetBlockTags` method exists on `authclient.Client`.

node/blocktag/service.go (7)
103-127: LGTM! The Start/Stop lifecycle management is well-implemented with proper context cancellation and graceful shutdown via the stop channel synchronization.
135-152: LGTM! Clean polling loop implementation with proper context cancellation handling and ticker cleanup.
199-267: LGTM! Well-structured method with clear separation of safe vs finalized tag handling, appropriate validation, and good logging.
316-361: LGTM! The recursion depth limit effectively prevents the infinite loop issue. The caching strategy with fallback to binary search is a reasonable optimization.
401-419: LGTM! The block number handling correctly distinguishes between the special `FinalizedBlockNumber` constant and actual block heights.
421-435: LGTM! Clean change-detection pattern that avoids redundant logging.
437-461: LGTM! Well-implemented notification deduplication that avoids redundant RPC calls and correctly handles the empty state.
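The deduplication pattern praised in the 437-461 comment reduces to a small sketch; `notifier` here is a hypothetical stand-in, with `calls` counting what would be `SetBlockTags` RPCs:

```go
package main

import "fmt"

// notifier caches the last-sent hashes and skips the RPC when
// neither tag changed; hashes are plain strings for illustration.
type notifier struct {
	lastSafe, lastFinal string
	calls               int
}

func (n *notifier) notify(safe, final string) {
	if safe == n.lastSafe && final == n.lastFinal {
		return // nothing changed: avoid a redundant RPC call
	}
	n.calls++ // stands in for the real SetBlockTags call
	// Record the new hashes only after the (simulated) send succeeds.
	n.lastSafe, n.lastFinal = safe, final
}

func main() {
	n := &notifier{}
	n.notify("0xaa", "0xbb")
	n.notify("0xaa", "0xbb") // duplicate: skipped
	n.notify("0xaa", "0xcc") // finalized changed: sent
	fmt.Println(n.calls)
}
```

Updating the cached hashes only after a successful send is what makes a failed notification retryable on the next poll tick.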
```go
// If finalized > safe, update safe to finalized (finalized is a stronger state)
if finalizedBlockNum > safeBlockNum {
	safeBlockHash = finalizedBlockHash
	s.setSafeL2Block(safeBlockHash)
}
```
Logic flaw when safe block retrieval fails.
When getL2BlockForTag(TagTypeSafe, ...) returns an error (line 166), safeBlockNum remains 0. If finalized retrieval succeeds with any positive block number, the condition finalizedBlockNum > safeBlockNum at line 178 will always be true, incorrectly overwriting the safe block even though the actual safe block wasn't determined.
🔧 Suggested fix
```diff
 var safeBlockNum uint64
 var safeBlockHash common.Hash
+var safeBlockValid bool

 // Update safe block
 safeBlockNum, safeBlockHash, err = s.getL2BlockForTag(TagTypeSafe, l2Head)
 if err != nil {
 	s.logger.Error("Failed to get safe L2 block", "error", err)
 } else if safeBlockHash != (common.Hash{}) {
 	s.setSafeL2Block(safeBlockHash)
+	safeBlockValid = true
 }

 // Update finalized block
 finalizedBlockNum, finalizedBlockHash, err := s.getL2BlockForTag(TagTypeFinalized, l2Head)
 if err != nil {
 	s.logger.Error("Failed to get finalized L2 block", "error", err)
 } else if finalizedBlockHash != (common.Hash{}) {
 	// If finalized > safe, update safe to finalized (finalized is a stronger state)
-	if finalizedBlockNum > safeBlockNum {
+	if safeBlockValid && finalizedBlockNum > safeBlockNum {
 		safeBlockHash = finalizedBlockHash
 		s.setSafeL2Block(safeBlockHash)
 	}
 	s.setFinalizedL2Block(finalizedBlockHash)
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
var safeBlockNum uint64
var safeBlockHash common.Hash
var safeBlockValid bool

// Update safe block
safeBlockNum, safeBlockHash, err = s.getL2BlockForTag(TagTypeSafe, l2Head)
if err != nil {
	s.logger.Error("Failed to get safe L2 block", "error", err)
} else if safeBlockHash != (common.Hash{}) {
	s.setSafeL2Block(safeBlockHash)
	safeBlockValid = true
}

// Update finalized block
finalizedBlockNum, finalizedBlockHash, err := s.getL2BlockForTag(TagTypeFinalized, l2Head)
if err != nil {
	s.logger.Error("Failed to get finalized L2 block", "error", err)
} else if finalizedBlockHash != (common.Hash{}) {
	// If finalized > safe, update safe to finalized (finalized is a stronger state)
	if safeBlockValid && finalizedBlockNum > safeBlockNum {
		safeBlockHash = finalizedBlockHash
		s.setSafeL2Block(safeBlockHash)
	}
	s.setFinalizedL2Block(finalizedBlockHash)
}
```
🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 177 - 181, The current logic can
overwrite the safe block when getL2BlockForTag(TagTypeSafe, ...) fails because
safeBlockNum stays 0; modify the flow in the function that calls
getL2BlockForTag to track whether the safe block was successfully retrieved
(e.g., a boolean safeFound or initialize safeBlockNum to -1) and only perform
the comparison/assignment to safeBlockHash and call s.setSafeL2Block(...) if the
safe retrieval succeeded; reference getL2BlockForTag(TagTypeSafe,...),
safeBlockNum, finalizedBlockNum, safeBlockHash and s.setSafeL2Block to locate
where to add the check and guard the finalize-vs-safe overwrite.
Force-pushed 68ac8b6 to d485931 (Compare)
Force-pushed d485931 to 9ae1755 (Compare)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@node/blocktag/service.go`:
- Around line 307-361: The comment above findCompletedBatchForL2BlockWithDepth
incorrectly states maxDepth is 1 while the code sets const maxDepth = 2; update
the comment to reflect the actual value (maxDepth is 2) so docs and code match,
keeping the constant name maxDepth and the function
findCompletedBatchForL2BlockWithDepth unchanged.
- Around line 199-282: In getL2BlockForTag replace the printf-style
s.logger.Info call that uses "batch %d already finalized (lastFinalized=%d),
state root may be deleted" with structured key/value fields: remove formatting
placeholders and call s.logger.Info("batch already finalized, state root may be
deleted", "batchIndex", targetBatchIndex, "lastFinalized",
lastFinalizedBatchIndex.Uint64()); ensure the same pattern (message string plus
alternating key, value pairs) is used so the logger receives proper key/value
pairs rather than unpaired keys.
In `@node/types/retryable_client.go`:
- Around line 225-240: The code calls a non-existent method
authClient.SetBlockTags from RetryableClient.SetBlockTags; either add that
method to the authclient.Client API or change RetryableClient to call the
correct existing method. Update the RetryableClient.SetBlockTags implementation
to invoke the actual method exposed by authclient.Client (or add a new
SetBlockTags method to the authclient.Client interface and its implementations)
so the call rc.authClient.SetBlockTags(...) is valid; keep the retry/backoff
logic around the correct call and preserve error handling via retryableError and
rc.logger.
♻️ Duplicate comments (2)
node/blocktag/service.go (2)
63-101: Close the L1 client when rollup binding fails.

If `bindings.NewRollup` errors, the already-open L1 client is leaked on this error path.

🧹 Proposed fix

```diff
 rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
 if err != nil {
+	l1Client.Close()
 	return nil, fmt.Errorf("failed to create rollup binding: %w", err)
 }
```
154-197: Guard safe overwrite when safe fetch fails.

If safe lookup errors, `safeBlockNum` stays 0 and finalized will always overwrite safe, which can regress safe on transient failures. Track whether safe retrieval succeeded before comparing.

✅ Suggested fix

```diff
 var safeBlockNum uint64
 var safeBlockHash common.Hash
+var safeFetched bool

 // Update safe block
 safeBlockNum, safeBlockHash, err = s.getL2BlockForTag(TagTypeSafe, l2Head)
 if err != nil {
 	s.logger.Error("Failed to get safe L2 block", "error", err)
-} else if safeBlockHash != (common.Hash{}) {
-	s.setSafeL2Block(safeBlockHash)
+} else {
+	safeFetched = true
+	if safeBlockHash != (common.Hash{}) {
+		s.setSafeL2Block(safeBlockHash)
+	}
 }

 // Update finalized block
 finalizedBlockNum, finalizedBlockHash, err := s.getL2BlockForTag(TagTypeFinalized, l2Head)
 if err != nil {
 	s.logger.Error("Failed to get finalized L2 block", "error", err)
 } else if finalizedBlockHash != (common.Hash{}) {
 	// If finalized > safe, update safe to finalized (finalized is a stronger state)
-	if finalizedBlockNum > safeBlockNum {
+	if safeFetched && finalizedBlockNum > safeBlockNum {
 		safeBlockHash = finalizedBlockHash
 		s.setSafeL2Block(safeBlockHash)
 	}
 	s.setFinalizedL2Block(finalizedBlockHash)
 }
```
```go
// findCompletedBatchForL2Block finds the largest batch where lastL2Block <= l2BlockNum.
// Uses cached index for optimization: first call binary search, subsequent calls search forward.
// Separate caches for safe and finalized to avoid conflicts.
func (s *BlockTagService) findCompletedBatchForL2Block(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64) (uint64, uint64, error) {
	return s.findCompletedBatchForL2BlockWithDepth(tagType, l2BlockNum, maxBatchIndex, 0)
}

// findCompletedBatchForL2BlockWithDepth is the internal implementation with recursion depth limit.
// maxDepth is set to 1 to allow one retry after cache reset.
func (s *BlockTagService) findCompletedBatchForL2BlockWithDepth(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64, depth int) (uint64, uint64, error) {
	const maxDepth = 2

	if maxBatchIndex == 0 {
		return 0, 0, fmt.Errorf("no batches available")
	}

	// Get cached index based on tag type
	startIdx := s.getCachedBatchIndex(tagType)
	if startIdx == 0 || startIdx > maxBatchIndex {
		// First time or cache invalid: use binary search to find starting point
		startIdx = s.binarySearchBatch(l2BlockNum, maxBatchIndex)
		if startIdx == 0 {
			return 0, 0, fmt.Errorf("no completed batch found for L2 block %d", l2BlockNum)
		}
	}

	// Search forward from startIdx
	var resultIdx, resultLastL2Block uint64
	for idx := startIdx; idx <= maxBatchIndex; idx++ {
		batchData, err := s.rollup.BatchDataStore(nil, big.NewInt(int64(idx)))
		if err != nil {
			return 0, 0, fmt.Errorf("failed to get batch data for index %d: %w", idx, err)
		}

		lastL2Block := batchData.BlockNumber.Uint64()
		if lastL2Block <= l2BlockNum {
			resultIdx = idx
			resultLastL2Block = lastL2Block
			s.setCachedBatchIndex(tagType, idx)
		} else {
			break
		}
	}

	// Handle L2 reorg: if cache was too new, reset and use binary search
	if resultIdx == 0 {
		if depth >= maxDepth {
			return 0, 0, fmt.Errorf("no completed batch found for L2 block %d after retry", l2BlockNum)
		}
		s.setCachedBatchIndex(tagType, 0)
		return s.findCompletedBatchForL2BlockWithDepth(tagType, l2BlockNum, maxBatchIndex, depth+1)
	}

	return resultIdx, resultLastL2Block, nil
}
```
Align the `maxDepth` comment with the actual value.

The comment says `maxDepth` is 1, but the constant is 2. Please align the comment or the value.

✏️ Example fix (comment-only)

```diff
-// maxDepth is set to 1 to allow one retry after cache reset.
+// maxDepth is set to 2 to allow up to two retries after cache reset.
 const maxDepth = 2
```

🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 307 - 361, The comment above
findCompletedBatchForL2BlockWithDepth incorrectly states maxDepth is 1 while the
code sets const maxDepth = 2; update the comment to reflect the actual value
(maxDepth is 2) so docs and code match, keeping the constant name maxDepth and
the function findCompletedBatchForL2BlockWithDepth unchanged.
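The cache-reset-plus-bounded-retry shape this function uses can be isolated into a small sketch. Everything below is hypothetical scaffolding — the `cache` and `lookup` names stand in for the cached batch index and the on-chain batch queries — and only illustrates why the `depth` guard keeps the recursion from looping when the cache is stale:

```go
package main

import (
	"errors"
	"fmt"
)

// findWithDepth retries after resetting a stale cache, with depth
// bounding the recursion (mirroring the maxDepth guard in
// findCompletedBatchForL2BlockWithDepth).
func findWithDepth(cache *uint64, lookup func(start uint64) (uint64, bool), depth int) (uint64, error) {
	const maxDepth = 2
	if v, ok := lookup(*cache); ok {
		*cache = v // refresh the cache on success
		return v, nil
	}
	if depth >= maxDepth {
		return 0, errors.New("not found after retry")
	}
	*cache = 0 // cache was too new (e.g. after a reorg): reset and retry
	return findWithDepth(cache, lookup, depth+1)
}

func main() {
	cache := uint64(10) // stale: points past the reorged range
	lookup := func(start uint64) (uint64, bool) {
		if start > 5 {
			return 0, false // a stale starting point finds nothing
		}
		return 5, true // a fresh search from 0 succeeds
	}
	v, err := findWithDepth(&cache, lookup, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // prints 5; cache is refreshed to 5
}
```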
```go
func (rc *RetryableClient) SetBlockTags(ctx context.Context, safeBlockHash common.Hash, finalizedBlockHash common.Hash) (err error) {
	if retryErr := backoff.Retry(func() error {
		respErr := rc.authClient.SetBlockTags(ctx, safeBlockHash, finalizedBlockHash)
		if respErr != nil {
			rc.logger.Info("failed to call SetBlockTags", "error", respErr)
			if retryableError(respErr) {
				return respErr
			}
			err = respErr
		}
		return nil
	}, rc.b); retryErr != nil {
		return retryErr
	}
	return
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Search for SetBlockTags in the codebase
echo "=== Searching for SetBlockTags ==="
rg -n "SetBlockTags" --type go
echo ""
echo "=== Searching for SetBlockTag (singular) ==="
rg -n "SetBlockTag" --type go
echo ""
echo "=== Searching for authclient package ==="
fd -t f "authclient" --type go | head -20
echo ""
echo "=== Look for authclient.Client definition ==="
rg -n "type Client struct" --type go -A 20 -B 2
```

Repository: morph-l2/morph
Length of output: 1426
🏁 Script executed:
```shell
#!/bin/bash
# Search for authclient more broadly
echo "=== Searching for authclient imports and usage ==="
rg -n "authclient\|authClient" --type go -B 2 -A 2 | head -100
echo ""
echo "=== Search for authclient.Client definition ==="
rg -n "type.*Client.*struct|type.*Client.*interface" --type go | grep -i auth
echo ""
echo "=== Find where authClient is instantiated in RetryableClient ==="
rg -n "type RetryableClient" --type go -A 10
echo ""
echo "=== Search for Client interface in authclient ==="
find . -name "*authclient*" -o -name "*auth_client*" 2>/dev/null | head -20
```

Repository: morph-l2/morph
Length of output: 1074
🏁 Script executed:
```shell
#!/bin/bash
# Check imports in retryable_client.go
echo "=== Check imports in retryable_client.go ==="
head -30 node/types/retryable_client.go | grep -E "import|authclient"
echo ""
echo "=== Look for authclient import block ==="
rg -n "import" node/types/retryable_client.go -A 20
echo ""
echo "=== Search for authclient in go.mod ==="
grep -i authclient go.mod 2>/dev/null || echo "No authclient found in go.mod"
echo ""
echo "=== Search for SetBlockTags definition (not just calls) ==="
rg -n "func.*SetBlockTags" --type go
echo ""
echo "=== Look for authclient package definition anywhere ==="
rg -n "^package authclient" --type go
rg -n "package.*auth" --type go | head -20
```

Repository: morph-l2/morph
Length of output: 1280
🏁 Script executed:
```shell
#!/bin/bash
# Check if go-ethereum is vendored
echo "=== Check for vendored go-ethereum ==="
find . -type d -name "go-ethereum" 2>/dev/null | head -5
echo ""
echo "=== Check go.mod for go-ethereum ==="
rg -n "go-ethereum" go.mod
echo ""
echo "=== Look for all Client method definitions in repo ==="
rg -n "func.*Client.*SetBlockTags|func \(.*Client\).*SetBlockTags" --type go
echo ""
echo "=== Check service.go to see how it uses SetBlockTags ==="
sed -n '450,460p' node/blocktag/service.go
```

Repository: morph-l2/morph
Length of output: 859
🏁 Script executed:
```shell
#!/bin/bash
# Search vendored go-ethereum for authclient
echo "=== Look for authclient in vendored go-ethereum ==="
find ./go-ethereum -name "*authclient*" -o -name "*auth_client*" 2>/dev/null
echo ""
echo "=== Search vendored authclient for SetBlockTags ==="
rg -n "SetBlockTags" ./go-ethereum --type go
echo ""
echo "=== Search vendored authclient for Client definition ==="
rg -n "type Client struct" ./go-ethereum/ethclient --type go -A 15 | head -50
echo ""
echo "=== List methods in vendored authclient ==="
find ./go-ethereum/ethclient/authclient -name "*.go" 2>/dev/null | xargs grep -h "^func" 2>/dev/null | head -20
```

Repository: morph-l2/morph
Length of output: 346
🏁 Script executed:
```shell
#!/bin/bash
# Explore go-ethereum structure
echo "=== List top-level contents of go-ethereum ==="
ls -la ./go-ethereum/ | head -20
echo ""
echo "=== Search entire go-ethereum for authclient ==="
find ./go-ethereum -type d -name "*auth*" 2>/dev/null
echo ""
echo "=== Search entire go-ethereum for SetBlockTags ==="
rg -n "SetBlockTags" ./go-ethereum 2>/dev/null | head -20
echo ""
echo "=== Find any Client definition with SetBlockTags in go-ethereum ==="
rg -n "func.*SetBlockTags" ./go-ethereum 2>/dev/null
```

Repository: morph-l2/morph
Length of output: 386
🏁 Script executed:
```shell
#!/bin/bash
# Check how l2Client is defined in service.go
echo "=== Find l2Client definition in blocktag/service.go ==="
rg -n "l2Client" node/blocktag/service.go -B 2 -A 2 | head -30
echo ""
echo "=== Look for type definition of BlockTagService ==="
rg -n "type.*BlockTagService.*struct" node/blocktag/service.go -A 10
echo ""
echo "=== Check where l2Client is assigned ==="
rg -n "l2Client\s*=" node/blocktag/service.go -B 2 -A 2
echo ""
echo "=== Check if SetBlockTags is interface or concrete method ==="
rg -n "SetBlockTags" node/ -B 5 -A 5 | head -80
```

Repository: morph-l2/morph
Length of output: 3217
Build break: `authclient.Client` has no `SetBlockTags` method.

The code calls `rc.authClient.SetBlockTags()` at line 227, but this method does not exist on `authclient.Client` (imported from `github.com/morph-l2/go-ethereum/ethclient/authclient`). Either add the method to the dependency or route this call differently.
🧰 Tools
🪛 GitHub Actions: Tx-submitter
[error] 227-227: rc.authClient.SetBlockTags undefined (type *authclient.Client has no field or method SetBlockTags)
🤖 Prompt for AI Agents
In `@node/types/retryable_client.go` around lines 225 - 240, The code calls a
non-existent method authClient.SetBlockTags from RetryableClient.SetBlockTags;
either add that method to the authclient.Client API or change RetryableClient to
call the correct existing method. Update the RetryableClient.SetBlockTags
implementation to invoke the actual method exposed by authclient.Client (or add
a new SetBlockTags method to the authclient.Client interface and its
implementations) so the call rc.authClient.SetBlockTags(...) is valid; keep the
retry/backoff logic around the correct call and preserve error handling via
retryableError and rc.logger.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@node/blocktag/service.go`:
- Around line 242-246: The rollup contract calls (e.g.,
s.rollup.LastFinalizedBatchIndex(nil)) currently pass nil CallOpts and ignore
cancellation; change these to pass a CallOpts that uses the service context so
RPCs cancel on shutdown — for example replace nil with &bind.CallOpts{Context:
s.ctx} (apply the same pattern to the other rollup calls referenced such as the
calls at the locations around lines invoking s.rollup at/near 294, 337, 386) so
all contract calls use s.ctx for cancellation.
♻️ Duplicate comments (3)
node/blocktag/service.go (3)
77-85: Potential resource leak: L1 client not closed on subsequent error.

If `bindings.NewRollup` fails at line 82-85, the `l1Client` created at line 77 is not closed.

🔧 Suggested fix

```diff
 l1Client, err := ethclient.Dial(config.L1Addr)
 if err != nil {
 	return nil, fmt.Errorf("failed to connect to L1: %w", err)
 }
 rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
 if err != nil {
+	l1Client.Close()
 	return nil, fmt.Errorf("failed to create rollup binding: %w", err)
 }
```
161-183: Logic flaw when safe block retrieval fails.

When `getL2BlockForTag(TagTypeSafe, ...)` returns an error (line 166), `safeBlockNum` remains 0. If finalized retrieval succeeds with any positive block number, the condition `finalizedBlockNum > safeBlockNum` at line 178 will always be true, incorrectly overwriting the safe block even though the actual safe block wasn't determined.

🔧 Suggested fix

```diff
 var safeBlockNum uint64
 var safeBlockHash common.Hash
+var safeBlockValid bool

 // Update safe block
 safeBlockNum, safeBlockHash, err = s.getL2BlockForTag(TagTypeSafe, l2Head)
 if err != nil {
 	s.logger.Error("Failed to get safe L2 block", "error", err)
 } else if safeBlockHash != (common.Hash{}) {
 	s.setSafeL2Block(safeBlockHash)
+	safeBlockValid = true
 }

 // Update finalized block
 finalizedBlockNum, finalizedBlockHash, err := s.getL2BlockForTag(TagTypeFinalized, l2Head)
 if err != nil {
 	s.logger.Error("Failed to get finalized L2 block", "error", err)
 } else if finalizedBlockHash != (common.Hash{}) {
 	// If finalized > safe, update safe to finalized (finalized is a stronger state)
-	if finalizedBlockNum > safeBlockNum {
+	if !safeBlockValid || finalizedBlockNum > safeBlockNum {
 		safeBlockHash = finalizedBlockHash
 		s.setSafeL2Block(safeBlockHash)
 	}
 	s.setFinalizedL2Block(finalizedBlockHash)
 }
```
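The fix suggested here amounts to distinguishing "safe lookup returned block 0" from "safe lookup failed". A minimal, self-contained sketch of that flag-guarded promotion (names hypothetical; block numbers stand in for hashes):

```go
package main

import "fmt"

// promoteSafe decides the effective safe block: when the safe lookup
// failed (safeOK == false), or when finalized has advanced past safe,
// the finalized block wins because finalized is the stronger state.
func promoteSafe(safeNum uint64, safeOK bool, finalizedNum uint64) uint64 {
	if !safeOK || finalizedNum > safeNum {
		return finalizedNum
	}
	return safeNum
}

func main() {
	// Safe lookup failed: finalized must still be promoted, and the
	// flag (not the zero value of safeNum) is what makes that explicit.
	fmt.Println(promoteSafe(0, false, 7)) // 7
	// Safe lookup succeeded and is ahead of finalized: keep it.
	fmt.Println(promoteSafe(9, true, 7)) // 9
}
```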
315-318: Align the `maxDepth` comment with the actual value.

The comment says `maxDepth` is set to 1, but the constant is 2.

✏️ Fix

```diff
 // findCompletedBatchForL2BlockWithDepth is the internal implementation with recursion depth limit.
-// maxDepth is set to 1 to allow one retry after cache reset.
+// maxDepth is set to 2 to allow up to two retries after cache reset.
 func (s *BlockTagService) findCompletedBatchForL2BlockWithDepth(tagType BlockTagType, l2HeaderNum uint64, lastCommittedBatchIndex uint64, depth int) (uint64, uint64, error) {
 	const maxDepth = 2
```
🧹 Nitpick comments (1)
node/blocktag/service.go (1)
386-389: Consider logging errors in binary search.

When `BatchDataStore` fails, the error is silently swallowed and the best result so far is returned. This could mask underlying RPC issues or contract problems.

♻️ Suggested improvement

```diff
 batchData, err := s.rollup.BatchDataStore(nil, big.NewInt(int64(mid)))
 if err != nil {
+	s.logger.Debug("Binary search batch lookup failed", "batchIndex", mid, "error", err)
 	return result // Return best result so far on error
 }
```
```go
	lastFinalizedBatchIndex, err := s.rollup.LastFinalizedBatchIndex(nil)
	if err != nil {
		s.logger.Info("Failed to get last finalized batch index, skipping state root validation", "error", err)
		return 0, common.Hash{}, nil
	}
```
Contract calls don't respect context cancellation.
The rollup.LastFinalizedBatchIndex(nil) call (and similar calls at lines 294, 337, 386) passes nil for CallOpts, so these RPC calls won't be cancelled when s.ctx is cancelled. This could cause the service to hang during shutdown.
🔧 Suggested fix for this occurrence
```diff
-	lastFinalizedBatchIndex, err := s.rollup.LastFinalizedBatchIndex(nil)
+	lastFinalizedBatchIndex, err := s.rollup.LastFinalizedBatchIndex(&bind.CallOpts{Context: s.ctx})
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	lastFinalizedBatchIndex, err := s.rollup.LastFinalizedBatchIndex(&bind.CallOpts{Context: s.ctx})
	if err != nil {
		s.logger.Info("Failed to get last finalized batch index, skipping state root validation", "error", err)
		return 0, common.Hash{}, nil
	}
```
🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 242 - 246, The rollup contract calls
(e.g., s.rollup.LastFinalizedBatchIndex(nil)) currently pass nil CallOpts and
ignore cancellation; change these to pass a CallOpts that uses the service
context so RPCs cancel on shutdown — for example replace nil with
&bind.CallOpts{Context: s.ctx} (apply the same pattern to the other rollup calls
referenced such as the calls at the locations around lines invoking s.rollup
at/near 294, 337, 386) so all contract calls use s.ctx for cancellation.
Summary by CodeRabbit
New Features
Configuration
Chores